

Search for: All records

Creators/Authors contains: "Hozhabrierdi, Pegah"


  1. In the early stages of a pandemic, epidemiological knowledge of the disease is limited and no vaccine is available. This poses the problem of determining an Early Mitigation Strategy. Previous studies have tackled this problem by finding globally influential nodes that contribute the most to the spread. These methods are often impractical because they assume that (1) the full contact social network is accessible; (2) there is an unlimited budget for the mitigation strategy; and (3) healthy individuals can be isolated for an indefinite amount of time, which in practice can have serious mental-health and economic consequences. In this work, we study the problem of developing an early mitigation strategy from a community perspective and propose a dynamic Community-based Mitigation strategy, ComMit. The distinguishing features of ComMit are that (1) it is agnostic to the dynamics of the spread; (2) it does not require prior knowledge of the contact network; (3) it works within a limited budget; and (4) it enforces bursts of short-term restrictions on small communities instead of long-term isolation of healthy individuals. ComMit relies on updated data from test-trace reports, and its strategy evolves over time. We have tested ComMit on several real-world social networks. Our experiments show that, within a small budget, ComMit can reduce the peak of infection by 73% and shorten the duration of infection by 90%, even for spreads that would otherwise reach a steady state of non-zero infections (e.g., under the SIS contagion model).
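    The abstract above describes an iterative, budgeted strategy that restricts whole communities based on test-trace observations. The following is a minimal, hypothetical sketch (Python, networkx) of that general idea under stated assumptions: simulate an SIS spread, and at each step place a short-term restriction on the few communities with the highest observed infection rate, using the infected nodes seen so far as a stand-in for test-trace reports. The community detection method, the greedy selection rule, and the parameter names (beta, gamma, budget, restriction_len) are illustrative choices, not the paper's ComMit algorithm.

    ```python
    # Hypothetical, simplified sketch of a community-based mitigation loop
    # (illustrative assumptions only; not the authors' ComMit implementation).

    import random
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    def sis_step(G, infected, beta=0.2, gamma=0.1, restricted=frozenset()):
        """One synchronous SIS step; edges touching restricted nodes are inactive."""
        new_infected = set(infected)
        for u in infected:
            if u in restricted:
                continue  # restricted nodes do not transmit during the burst
            for v in G.neighbors(u):
                if v not in infected and v not in restricted and random.random() < beta:
                    new_infected.add(v)
        for u in infected:
            if random.random() < gamma:
                new_infected.discard(u)  # recovered nodes become susceptible again (SIS)
        return new_infected

    def commit_like_mitigation(G, seeds, budget=2, restriction_len=3, steps=60):
        """Each step, restrict the `budget` communities with the highest fraction
        of observed infected members (a crude proxy for test-trace reports)."""
        communities = list(greedy_modularity_communities(G))
        infected = set(seeds)
        active = {}  # community index -> remaining restriction steps
        history = []
        for _ in range(steps):
            # Decay existing restrictions and drop the ones that have expired.
            active = {c: t - 1 for c, t in active.items() if t > 1}
            # Rank communities by observed infection rate and restrict the top ones.
            ranked = sorted(range(len(communities)),
                            key=lambda i: len(communities[i] & infected) / len(communities[i]),
                            reverse=True)
            for c in ranked[:budget]:
                active[c] = restriction_len
            restricted = frozenset().union(*(communities[c] for c in active))
            infected = sis_step(G, infected, restricted=restricted)
            history.append(len(infected))
        return history

    if __name__ == "__main__":
        G = nx.karate_club_graph()
        print(commit_like_mitigation(G, seeds=[0, 33]))
    ```

    Running the script prints the number of infected nodes per step; with restrictions active, the count typically peaks lower and dies out sooner than an unmitigated SIS run on the same graph.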
  2.
    In many network applications, it may be desirable to conceal certain target nodes from detection by a data collector who is using a crawling algorithm to explore a network. For example, in a computer network, the network administrator may wish to protect computers (target nodes) holding sensitive information from discovery by a hacker who has exploited vulnerable machines and entered the network. Such networks are often protected by hiding the machines (nodes) from external access and allowing only fixed entry points into the system (protection against external attacks). However, under this protection scheme, once one of the entry points is breached, the safety of all internal machines is jeopardized (i.e., the external attack turns into an internal attack). In this paper, we view this problem from the perspective of the data protector. We propose the Node Protection Problem: given a network with known entry points, which edges should be removed or added so as to protect as many target nodes from the data collector as possible? A trivial way to solve this problem would be to simply disconnect either the entry points or the target nodes, but that would make the network non-functional. Accordingly, we impose certain constraints: for each node, only a (1 − r) fraction of its edges can be removed, and the resulting network must not be disconnected. We propose two novel scoring mechanisms, the Frequent Path Score and the Shortest Path Score. Using these scores, we propose NetProtect, an algorithm that selects edges to be removed or added so as to best impede the progress of the data collector. We show experimentally that NetProtect outperforms baseline node protection algorithms across several real-world networks. In some datasets, with 1% of the edges removed by NetProtect, the data collector requires up to 6 (4) times the budget of the next best baseline in order to discover 5 (50) target nodes.
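    As an illustration of shortest-path-based edge scoring in the spirit of the Shortest Path Score mentioned above, the sketch below (Python, networkx) counts how often each edge lies on a shortest path from an entry point to a target node, then greedily removes high-scoring edges while keeping at least an r fraction of every node's original degree and never disconnecting the graph. This single-pass greedy rule and all function names are assumptions made for illustration; the paper's actual scoring and selection procedures may differ.

    ```python
    # Hypothetical sketch of shortest-path-based edge removal
    # (illustrative assumptions only; not the paper's NetProtect algorithm).

    import networkx as nx
    from collections import Counter

    def shortest_path_edge_scores(G, entry_points, targets):
        """Count how often each edge lies on a shortest path from an entry point
        to a target node (one representative shortest path per pair)."""
        scores = Counter()
        for s in entry_points:
            for t in targets:
                try:
                    path = nx.shortest_path(G, s, t)
                except nx.NetworkXNoPath:
                    continue
                for u, v in zip(path, path[1:]):
                    scores[frozenset((u, v))] += 1
        return scores

    def select_edges_to_remove(G, entry_points, targets, r=0.5, budget=5):
        """Greedily pick high-scoring edges to cut, keeping at least an r fraction
        of every node's original degree and never disconnecting the graph."""
        H = G.copy()
        min_degree = {n: int(r * d) for n, d in G.degree()}
        scores = shortest_path_edge_scores(H, entry_points, targets)
        removed = []
        for edge, _ in scores.most_common():
            if len(removed) >= budget:
                break
            u, v = tuple(edge)
            if H.degree(u) - 1 < min_degree[u] or H.degree(v) - 1 < min_degree[v]:
                continue  # would violate the per-node degree-retention constraint
            H.remove_edge(u, v)
            if nx.is_connected(H):
                removed.append((u, v))
            else:
                H.add_edge(u, v)  # undo: this removal would disconnect the network
        return removed

    if __name__ == "__main__":
        G = nx.karate_club_graph()
        print(select_edges_to_remove(G, entry_points=[0], targets=[32, 33]))
    ```

    The returned edges are those whose removal most lengthens the crawler's paths from the entry points to the targets under this simplified score, while respecting the connectivity and degree constraints described in the abstract.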